
    Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee convergence to a solution for the graph coloring problems. The network is composed of non-saturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
    Comment: Accepted manuscript, in press, Neural Computation (2018)
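The cooperative-competitive module at the heart of this construction can be sketched concretely. Below is a minimal soft winner-take-all of non-saturating linear threshold neurons with one shared inhibitory neuron; all parameter values and the two-unit setup are illustrative assumptions, not taken from the paper:

```python
def relu(v):
    """Non-saturating linear threshold transfer function."""
    return v if v > 0.0 else 0.0

def run_wta(inputs, alpha=1.5, beta=1.0, dt=0.01, steps=5000):
    """Euler-integrate excitatory units that share one inhibitory neuron.

    alpha > 1 gives the unstable recurrent excitation that explores the
    problem space; beta is the strength of the shared inhibition that
    steers and ultimately stabilizes the competition.
    """
    x = [0.0] * len(inputs)  # excitatory firing rates
    y = 0.0                  # shared inhibitory firing rate
    for _ in range(steps):
        x = [xi + dt * (-xi + relu(I + alpha * xi - beta * y))
             for xi, I in zip(x, inputs)]
        y += dt * (-y + relu(sum(x)))
    return x

rates = run_wta([1.0, 0.8])
# the stronger input is amplified toward I / (1 - alpha + beta) = 2.0
# while the loser is driven to zero by the shared inhibition
```

The winner is selected by an unstable difference mode (recurrent excitation), and the winner-plus-inhibition subsystem is then stable, illustrating the paper's theme of computation through instability.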

    Competition through selective inhibitory synchrony

    Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and to selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily, because their axons and dendrites are co-localized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this co-localization assumption is not valid. In this paper we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron, and even across different cortical areas. We prove by non-linear contraction analysis, and demonstrate by simulation, that distributed WTA sub-systems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
    Comment: in press at Neural Computation; 4 figures
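A toy version of a spatially distributed competition can be simulated: two separate modules, each with one excitatory unit and one local inhibitory neuron, where the only long-range links recruit the remote module's inhibitory neuron. This is a simplification of the paper's synchrony mechanism, and all weights are illustrative assumptions:

```python
def relu(v):
    return v if v > 0.0 else 0.0

def run_coupled(I1, I2, a=1.2, b=1.0, w=0.7, dt=0.01, steps=6000):
    """Two modules (x1, y1) and (x2, y2). No excitatory unit inhibits the
    other directly; each excitatory unit also drives the remote module's
    inhibitory neuron with weight w, coupling the two local competitions."""
    x1 = x2 = y1 = y2 = 0.0
    for _ in range(steps):
        nx1 = x1 + dt * (-x1 + relu(I1 + a * x1 - b * y1))
        nx2 = x2 + dt * (-x2 + relu(I2 + a * x2 - b * y2))
        ny1 = y1 + dt * (-y1 + relu(x1 + w * x2))  # local + remote drive
        ny2 = y2 + dt * (-y2 + relu(x2 + w * x1))
        x1, x2, y1, y2 = nx1, nx2, ny1, ny2
    return x1, x2

x1, x2 = run_coupled(1.0, 0.5)
# the module with the stronger input wins (x1 -> I1 / (1 - a + b) = 1.25)
# and silences the distant module purely via its local inhibitory neuron
```

The point of the sketch is that a single effective WTA emerges even though no inhibitory neuron's arbor spans both modules.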

    Collective stability of networks of winner-take-all circuits

    The neocortex has a remarkably uniform neuronal organization, suggesting that common principles of processing are employed throughout its extent. In particular, the patterns of connectivity observed in the superficial layers of the visual cortex are consistent with the recurrent excitation and inhibitory feedback required for cooperative-competitive circuits such as the soft winner-take-all (WTA). WTA circuits offer interesting computational properties such as selective amplification, signal restoration, and decision making. But these properties depend on the signal gain derived from positive feedback, and so there is a critical trade-off between providing feedback strong enough to support sophisticated computation and maintaining overall circuit stability. We consider the question of how to reason about stability in very large distributed networks of such circuits. We approach this problem by approximating the regular cortical architecture as many interconnected cooperative-competitive modules. We demonstrate that by properly understanding the behavior of this small computational module, one can reason about the stability and convergence of very large networks composed of these modules. We obtain parameter ranges in which the WTA circuit operates in a high-gain regime, is stable, and can be aggregated arbitrarily to form large stable networks. We use nonlinear Contraction Theory to establish conditions for stability in the fully nonlinear case, and verify these solutions using numerical simulations. The derived bounds allow modes of operation in which the WTA network is multi-stable and exhibits state-dependent persistent activities. Our approach is sufficiently general to reason systematically about the stability of any network, biological or technological, composed of networks of small modules that express competition through shared inhibition.
    Comment: 7 figures
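The gain-versus-stability trade-off can be made explicit with a simple linear-regime check: once a winner and the inhibitory neuron are the only active units of a soft WTA, the subsystem is linear and can be analyzed by hand. The sketch below uses a two-unit module with unit time constants and unit excitatory-to-inhibitory weight (alpha = recurrent excitatory gain, beta = inhibitory gain); it is not the paper's full nonlinear contraction analysis:

```python
def high_gain_and_stable(alpha, beta):
    """Check the active (winner + inhibition) subsystem of a soft WTA.

    Its Jacobian is [[alpha - 1, -beta], [1, -1]]: stable iff trace < 0
    and determinant > 0, i.e. alpha < 2 and alpha < 1 + beta. The winner
    amplification 1 / (1 - alpha + beta) exceeds 1 when alpha > beta.
    """
    trace = (alpha - 1.0) - 1.0
    det = -(alpha - 1.0) + beta
    stable = trace < 0.0 and det > 0.0
    high_gain = alpha > beta
    return stable, high_gain

# e.g. alpha = 1.5, beta = 1.0 lies in the usable band: stable AND
# high-gain, so such modules can be composed without runaway activity
```

Parameter pairs that pass both checks are the kind of operating regime in which modules can be aggregated into larger networks.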

    From Neural Arbors to Daisies

    Pyramidal neurons in layers 2 and 3 of the neocortex collectively form a horizontal lattice of long-range, periodic axonal projections, known as the superficial patch system. The precise pattern of projections varies between cortical areas, but the patch system has nevertheless been observed in every area of cortex in which it has been sought, across many higher mammals. Although the clustered axonal arbors of single pyramidal cells have been examined in detail, the precise rules by which these neurons collectively merge their arbors remain unknown. To discover these rules, we generated models of clustered axonal arbors following simple geometric patterns. We found that models assuming spatially aligned but independent formation of each axonal arbor do not produce patchy labeling patterns for large simulated injections into populations of generated axonal arbors. In contrast, a model that used information distributed across the cortical sheet to generate axonal projections reproduced every observed quality of cortical labeling patterns. We conclude that the patch system cannot be built during development using only information intrinsic to single neurons. Information shared across the population of patch-projecting neurons is required for the patch system to reach its adult state.

    Constructive connectomics: How neuronal axons get from here to there using gene-expression maps derived from their family trees

    During brain development, billions of axons must navigate over multiple spatial scales to reach specific neuronal targets, and so build the processing circuits that generate the intelligent behavior of animals. However, the limited information capacity of the zygotic genome puts a strong constraint on how, and which, axonal routes can be encoded. We propose and validate a mechanism of development that can provide an efficient encoding of this global wiring task. The key principle, confirmed through simulation, is that basic constraints on mitoses of neural stem cells—that mitotic daughters have similar gene expression to their parent and do not stray far from one another—induce a global hierarchical map of nested regions, each marked by the expression profile of its common progenitor population. Thus, a traversal of the lineal hierarchy generates a systematic sequence of expression profiles that traces a staged route, which growth cones can follow to their remote targets. We have analyzed gene expression data of developing and adult mouse brains published by the Allen Institute for Brain Science, and found them consistent with our simulations: gene expression indeed partitions the brain into a global spatial hierarchy of nested contiguous regions that is stable at least from embryonic day 11.5 to postnatal day 56. We use these experimental data to demonstrate that our axonal guidance algorithm is able to robustly extend arbors over long distances to specific targets, and that these connections result in a qualitatively plausible connectome. We conclude that, paradoxically, cell division may be the key to uniting the neurons of the brain.
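The core routing idea, stripped of all biology, is a walk through a lineage tree: a growth cone climbs from its source region's expression labels up to the nearest common progenitor label and back down to the target. The region names and tree below are invented for illustration:

```python
# Toy sketch of hierarchy-guided routing: regions form a lineage tree
# (child -> parent), each labelled by its progenitor's expression profile.
# A route between two regions is the staged sequence of labels through
# their nearest common ancestor.

def route(parent, src, dst):
    """Return the waypoint labels from region src to region dst."""
    up = [src]                      # ancestors of src, bottom-up
    while up[-1] in parent:
        up.append(parent[up[-1]])
    ancestors = set(up)
    down = [dst]                    # climb from dst until the chains meet
    while down[-1] not in ancestors:
        down.append(parent[down[-1]])
    pivot = down[-1]                # nearest common ancestor
    return up[:up.index(pivot) + 1] + down[-2::-1]

lineage = {"V1": "occipital", "occipital": "cortex",
           "M1": "frontal", "frontal": "cortex"}
# route(lineage, "V1", "M1")
#   -> ["V1", "occipital", "cortex", "frontal", "M1"]
```

Because every region carries its whole ancestor chain implicitly, the genome only needs to encode the tree, not each of the billions of routes.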

    Axons in Cat Visual Cortex are Topologically Self-similar

    The axonal arbors of the different types of neocortical and thalamic neurons appear highly dissimilar when viewed in conventional 2D reconstructions. Nevertheless, we have found that their one-dimensional metrics and topologies are surprisingly similar. To discover this, we analysed the axonal branching pattern of 39 neurons (23 spiny, 13 smooth and three thalamic axons) that were filled intracellularly with horseradish peroxidase (HRP) during in vivo experiments in cat area 17. The axons were completely reconstructed and translated into dendrograms. Topological, fractal and Horton-Strahler analyses indicated that axons of smooth and spiny neurons had similar complexity, length ratios (a measure of the relative increase in the length of collateral segments as the axon branches) and bifurcation ratios (a measure of the relative increase in the number of collateral segments as the axon branches). We show that a simple random branching model (Galton-Watson process) predicts with reasonable accuracy the bifurcation ratio, length ratio and collateral length distribution of the axonal arbors.
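The Galton-Watson model is a few lines of code. In the sketch below every segment bifurcates with a fixed probability p (the value 0.4 is a hypothetical choice, not a fit from the paper); for mean offspring 2p < 1 the expected total number of segments per arbor is the standard branching-process result 1/(1 - 2p):

```python
import random

def arbor_size(p, rng):
    """Total segment count of one Galton-Watson arbor: each segment
    either bifurcates into two daughters (probability p) or terminates."""
    size, pending = 0, 1
    while pending:
        pending -= 1
        size += 1
        if rng.random() < p:
            pending += 2
    return size

rng = random.Random(0)      # seeded for reproducibility
p = 0.4                     # subcritical: mean offspring 2p = 0.8 < 1
mean = sum(arbor_size(p, rng) for _ in range(20000)) / 20000.0
# mean is close to the theoretical 1 / (1 - 2p) = 5 segments per arbor
```

From many such sampled trees one could likewise tabulate Horton-Strahler orders to estimate bifurcation ratios, which is the comparison the paper makes against the reconstructed axons.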

    Computation in Dynamically Bounded Asymmetric Systems

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems.
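The expansion-then-contraction behavior can be observed directly by integrating two nearby trajectories of a small asymmetric linear threshold network and tracking their distance: the gap first grows under the unstable dynamics, then collapses once both trajectories contract onto the same solution. The two-unit competitive circuit and its parameters are illustrative assumptions:

```python
def relu(v):
    return v if v > 0.0 else 0.0

def step(state, inputs, alpha=1.5, beta=1.0, dt=0.01):
    """One Euler step: two linear threshold units with shared inhibition."""
    x1, x2, y = state
    return (x1 + dt * (-x1 + relu(inputs[0] + alpha * x1 - beta * y)),
            x2 + dt * (-x2 + relu(inputs[1] + alpha * x2 - beta * y)),
            y + dt * (-y + relu(x1 + x2)))

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

s1 = (0.0, 0.0, 0.0)
s2 = (1e-6, 0.0, 0.0)       # same start, tiny perturbation
d0, dmax = dist(s1, s2), 0.0
for _ in range(5000):
    s1, s2 = step(s1, (1.0, 0.8)), step(s2, (1.0, 0.8))
    dmax = max(dmax, dist(s1, s2))
# dmax > d0: the gap is transiently amplified (unstable expansion);
# the final gap is far below d0 (contraction onto the solution)
```

The rectification of the losing unit is what reduces the dimensionality of the explored space, after which the remaining active subsystem is contracting.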

    Developmental Origin of Patchy Axonal Connectivity in the Neocortex: A Computational Model

    Injections of neural tracers into many mammalian neocortical areas reveal a common patchy motif of clustered axonal projections. We studied in simulation a mathematical model for neuronal development in order to investigate how this patchy connectivity could arise in layer II/III of the neocortex. In our model, individual neurons of this layer expressed the activator-inhibitor components of a Gierer-Meinhardt reaction-diffusion system. The resultant steady-state reaction-diffusion pattern across the neuronal population was approximately hexagonal. Growth cones at the tips of extending axons used the various morphogens secreted by intrapatch neurons as guidance cues to direct their growth and invoke axonal arborization, so yielding a patchy distribution of arborization across the entire layer II/III. We found that adjustment of a single parameter yields the intriguing linear relationship between average patch diameter and interpatch spacing that has been observed experimentally over many cortical areas and species. We conclude that a simple Gierer-Meinhardt system expressed by the neurons of the developing neocortex is sufficient to explain the patterns of clustered connectivity observed experimentally.
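A minimal 1D sketch of the pattern-forming mechanism (the paper's model is 2D, which is what yields the roughly hexagonal layout; the parameter values here are assumptions chosen so that a Turing instability exists on the lattice): starting from a nearly homogeneous state, small activator fluctuations grow into a spatially non-uniform, periodic pattern.

```python
import random

def laplacian(u, i):
    """Discrete Laplacian on a ring (periodic boundary)."""
    return u[i - 1] + u[(i + 1) % len(u)] - 2.0 * u[i]

def gierer_meinhardt(n=60, steps=500, dt=0.02, Da=0.01, Dh=2.0, seed=0):
    """1D activator-inhibitor system:
         da/dt = a^2/h - a     + Da * lap(a)   (short-range activator)
         dh/dt = a^2   - 2*h   + Dh * lap(h)   (long-range inhibitor)
    The homogeneous steady state (a, h) = (2, 2) is Turing-unstable:
    small fluctuations are amplified into a spatial pattern."""
    rng = random.Random(seed)
    a = [2.0 + 1e-3 * (2.0 * rng.random() - 1.0) for _ in range(n)]
    h = [2.0] * n
    a0 = list(a)                       # keep the initial activator field
    for _ in range(steps):
        a = [a[i] + dt * (a[i] * a[i] / h[i] - a[i] + Da * laplacian(a, i))
             for i in range(n)]
        h = [h[i] + dt * (a[i] * a[i] - 2.0 * h[i] + Dh * laplacian(h, i))
             for i in range(n)]
    return a0, a

initial, final = gierer_meinhardt()
# the spatial spread of the activator grows by orders of magnitude as
# the instability selects a pattern from the noise
```

The full model additionally lets the saturated peaks act as morphogen sources for growth cones; this sketch only demonstrates the instability that seeds the patches.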

    State-Dependent Computation Using Coupled Recurrent Networks

    Although conditional branching between possible behavioral states is a hallmark of intelligent behavior, very little is known about the neuronal mechanisms that support this processing. In a step toward solving this problem, we demonstrate by theoretical analysis and simulation how networks of richly interconnected neurons, such as those observed in the superficial layers of the neocortex, can embed reliable, robust finite state machines. We show how a multistable neuronal network containing a number of states can be created very simply by coupling two recurrent networks whose synaptic weights have been configured for soft winner-take-all (sWTA) performance. These two sWTAs have simple, homogeneous, locally recurrent connectivity except for a small fraction of recurrent cross-connections between them, which are used to embed the required states. This coupling between the maps allows the network to continue to express the current state even after the input that elicited that state is withdrawn. In addition, a small number of transition neurons implement the necessary input-driven transitions between the embedded states. We provide simple rules to systematically design and construct neuronal state machines of this kind. The significance of our finding is that it offers a method whereby the cortex could construct networks supporting a broad range of sophisticated processing by applying only small specializations to the same generic neuronal circuit.
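The persistence mechanism can be sketched with a toy version of the coupled-map idea: two small WTA-like maps whose same-index units excite each other, so that a brief input selects a state and the cross-coupling keeps it active after the input is withdrawn. All parameters are illustrative, tuned so the active winner loop has exactly unity gain (a line attractor); they are not taken from the paper:

```python
def relu(v):
    return v if v > 0.0 else 0.0

def simulate(pulse_steps=500, steps=5000, dt=0.01,
             alpha=1.5, beta=1.0, gamma=0.5, tau_y=0.2):
    """Two 2-unit maps A and B with local shared inhibition (ya, yb);
    unit i of A excites unit i of B and vice versa (weight gamma).
    Choosing gamma = 1 - alpha + beta makes the winner's cross-map loop
    have unity gain, so its activity persists without external input."""
    xa, xb = [0.0, 0.0], [0.0, 0.0]
    ya = yb = 0.0
    history = []
    for t in range(steps):
        I = 1.0 if t < pulse_steps else 0.0   # brief state-selection input
        na = [xa[i] + dt * (-xa[i] + relu((I if i == 0 else 0.0)
              + alpha * xa[i] + gamma * xb[i] - beta * ya)) for i in (0, 1)]
        nb = [xb[i] + dt * (-xb[i] + relu(alpha * xb[i]
              + gamma * xa[i] - beta * yb)) for i in (0, 1)]
        ya += dt / tau_y * (-ya + sum(xa))    # fast local inhibition
        yb += dt / tau_y * (-yb + sum(xb))
        xa, xb = na, nb
        history.append(xa[0])
    return xa, xb, history

xa, xb, hist = simulate()
# after the pulse ends, unit 0 of both maps remains active (a persistent
# state) while unit 1 stays silent
```

A full state machine would embed several such states and add transition neurons; this sketch only shows the state-holding ingredient.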

    Amplifying and Linearizing Apical Synaptic Inputs to Cortical Pyramidal Cells

    Intradendritic electrophysiological recordings reveal a bewildering repertoire of complex electrical spikes and plateaus that are difficult to reconcile with conventional notions of neuronal function. In this paper we argue that such dendritic events are just an exuberant expression of a more important mechanism: a proportional current amplifier whose primary task is to offset electrotonic losses. Using the example of functionally important synaptic inputs to the superficial layers of an anatomically and electrophysiologically reconstructed layer 5 pyramidal neuron, we derive and simulate the properties of conductances that linearize and amplify distal synaptic input current in a graded manner. The amplification depends on a potassium conductance in the apical tuft and calcium conductances in the apical trunk.